We study the problem of deleting user data from machine learning models trained using empirical risk minimization (ERM). Our focus is on learning algorithms that return the empirical risk minimizer and on approximate unlearning algorithms that comply with deletion requests arriving in an online (streaming) fashion. Leveraging the infinitesimal jackknife, we develop an online unlearning algorithm that is both computationally and memory efficient. Unlike prior memory-efficient unlearning algorithms, we target models that minimize objectives with non-smooth regularizers, such as the commonly used $\ell_1$, elastic-net, or nuclear-norm penalties. We also provide generalization, deletion-capacity, and unlearning guarantees that are consistent with state-of-the-art methods. Across a variety of benchmark datasets, our algorithm improves on the runtime of prior methods while maintaining the same memory requirements and test accuracy. Finally, we open a new direction of inquiry by proving that all approximate unlearning algorithms introduced so far fail to unlearn in problem settings where common hyperparameter tuning methods, such as cross-validation, have been used.
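To make the deletion mechanism concrete, here is a minimal sketch of the infinitesimal-jackknife (Newton-step) correction that this family of unlearning methods builds on, applied to ridge regression, where every quantity is exact and easy to check against retraining. The setup is illustrative only: the algorithm above additionally handles non-smooth regularizers such as $\ell_1$, which this smooth sketch does not.

```python
import numpy as np

def unlearn_point(theta, H_inv, grad_i):
    """One infinitesimal-jackknife / Newton correction for deleting point i:
    theta_{-i} ~= theta + H^{-1} grad_i, where H is the Hessian of the
    full-data objective at theta and grad_i is the deleted point's loss gradient."""
    return theta + H_inv @ grad_i

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# ERM solution and cached inverse Hessian for ridge regression.
H = X.T @ X + lam * np.eye(d)
H_inv = np.linalg.inv(H)          # computed once, reused for every deletion
theta = H_inv @ (X.T @ y)

# Delete point 0 and compare against exact retraining on the remaining points.
g0 = (X[0] @ theta - y[0]) * X[0]
theta_unlearned = unlearn_point(theta, H_inv, g0)
theta_exact = np.linalg.solve(X[1:].T @ X[1:] + lam * np.eye(d), X[1:].T @ y[1:])
print(np.linalg.norm(theta_unlearned - theta_exact))  # small residual
```

Because the inverse Hessian is cached, each deletion costs only a matrix-vector product, which is the source of the computational and memory efficiency claimed above.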
Systems capable of analyzing and quantifying human physical or behavioral traits, known as biometric systems, are growing in use and application variability. Since their shift from handcrafted features and traditional machine learning to deep learning and automatic feature extraction, the performance of biometric systems has risen to outstanding levels. Nonetheless, the cost of this rapid progression is still not well understood. Because of their opacity, deep neural networks are difficult to understand and analyze; hence, hidden capacities, or decisions motivated by the wrong reasons, are a potential risk. Researchers have begun to focus their attention on understanding deep neural networks and explaining their predictions. In this paper, we survey the current state of explainable biometrics based on a study of 47 papers and comprehensively discuss the directions in which this field should be developed.
Recent work on deep learning for tabular data demonstrates the strong performance of deep tabular models, often bridging the gap between gradient-boosted decision trees and neural networks. Beyond accuracy, a major advantage of neural models is that they learn reusable features and are easily fine-tuned in new domains. This property is often exploited in computer vision and natural language applications, where transfer learning is indispensable when task-specific training data are scarce. In this work, we demonstrate that upstream data gives tabular neural networks a decisive advantage over widely used GBDT models. We propose a realistic medical-diagnosis benchmark for tabular transfer learning and present a how-to guide for using upstream data to boost performance with a variety of tabular neural network architectures. Finally, we propose a pseudo-feature method for cases where the upstream and downstream feature sets differ, a tabular-specific problem that is widespread in real-world settings. Our code is available at https://github.com/levinroman/tabular-transfer-learning.
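A hedged sketch of the pseudo-feature idea as described above: when a feature present downstream is missing upstream, fit an auxiliary model on downstream data to predict it, impute it for the upstream rows, and then pretrain on the aligned feature set before fine-tuning. All estimators and names here are illustrative stand-ins (the authors use deep tabular models; see the repository for their actual implementation).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Upstream rows lack feature f3; the small downstream set has it.
X_up = rng.normal(size=(5000, 3))                      # features f0..f2
y_up = (X_up[:, 0] > 0).astype(int)
X_down = rng.normal(size=(200, 4))                     # features f0..f3
y_down = (X_down[:, 0] + X_down[:, 3] > 0).astype(int)

# 1) Learn to predict the missing feature from the shared ones (downstream data).
feat_model = RandomForestRegressor(n_estimators=50, random_state=0)
feat_model.fit(X_down[:, :3], X_down[:, 3])

# 2) Impute the pseudo-feature upstream so both feature sets align.
X_up_aligned = np.hstack([X_up, feat_model.predict(X_up)[:, None]])

# 3) Pretrain on the large upstream set, then fine-tune downstream
#    (warm_start reuses the upstream coefficients as the starting point).
clf = LogisticRegression(warm_start=True, max_iter=500)
clf.fit(X_up_aligned, y_up)   # upstream pretraining
clf.fit(X_down, y_down)       # downstream fine-tuning
```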
We present MapReader, a free, open-source software library written in Python for analyzing large map collections (scanned or born-digital). This library transforms the way historians can use maps by turning extensive, homogeneous map sets into searchable primary sources. MapReader allows users with little or no computer vision expertise to i) retrieve maps via web servers; ii) preprocess and divide them into patches; iii) annotate patches; iv) train, fine-tune, and evaluate deep neural network models; and v) create structured data about map content. We demonstrate how MapReader enables historians to interpret a collection of $\approx$16K nineteenth-century Ordnance Survey map sheets ($\approx$30.5M patches), foregrounding the challenge of translating visual markers into machine-readable data. We present a case study focusing on British rail infrastructure and buildings as depicted on these maps. We also show how the outputs of the MapReader pipeline can be linked to other, external datasets, which we use both to evaluate and to enrich and interpret the results. We release $\approx$62K manually annotated patches used for training and evaluating the models.
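A schematic of the patch workflow (steps ii, iii, and v) in generic Python; this is not MapReader's actual API, just the underlying idea of slicing a sheet into fixed-size patches, labeling each patch, and emitting structured records. The label names echo the rail/buildings case study but are placeholders.

```python
from dataclasses import dataclass
from PIL import Image

@dataclass
class PatchRecord:
    sheet_id: str
    x: int          # top-left pixel coordinates of the patch on the sheet
    y: int
    label: str      # e.g. "rail_space", "building", "no_feature" (illustrative)

def patchify_and_label(sheet_path, sheet_id, classify, patch_size=100):
    """Slice a scanned map sheet into square patches and label each one.
    `classify` is any callable mapping a PIL image to a string label;
    in MapReader this role is played by a fine-tuned CNN (step iv)."""
    img = Image.open(sheet_path)
    w, h = img.size
    records = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = img.crop((x, y, x + patch_size, y + patch_size))
            records.append(PatchRecord(sheet_id, x, y, classify(patch)))
    return records  # structured, queryable data about map content (step v)
```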
We leverage state-of-the-art machine learning methods and a decade of archival data from CFHT to predict observatory image quality (IQ) from environmental conditions and observatory operating parameters. Specifically, we develop accurate and interpretable models of the complex dependence between data features and the IQ observed with CFHT's wide-field camera, MegaCam. Our contributions are several-fold. First, we collect, collate, and reprocess several disparate datasets gathered by CFHT scientists. Second, we predict probability distribution functions (PDFs) of IQ, achieving a mean absolute error of $\sim0.07''$ for the predicted medians. Third, we explore the data-driven actuation of the 12 dome "vents" installed in 2013-14 to accelerate the flushing of hot air from the dome. We combine epistemic and aleatoric uncertainties with probabilistic generative modeling to identify candidate vent adjustments that are in-distribution (ID); for the optimal configuration of each ID sample, we predict the reduction in observing time required to achieve a fixed SNR. On average, the reduction is $\sim12\%$. Finally, we rank input features by their Shapley values to identify the most predictive variables for each observation. Our long-term goal is to construct reliable, real-time models that can forecast optimal observatory operating parameters to optimize IQ. We can then feed these forecasts into scheduling protocols and predictive-maintenance routines. We anticipate that such methods will become standard in automating observatory operations and maintenance by the time CFHT's successor, the Maunakea Spectroscopic Explorer, is installed in the coming decade.
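A hedged sketch of two of the modeling ingredients named above: approximating the IQ PDF by quantile regression and ranking features by Shapley values (here via the `shap` package's tree explainer). The feature names and synthetic data are placeholders, not CFHT's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
import shap  # pip install shap

rng = np.random.default_rng(2)
feature_names = ["wind_speed", "dome_temp_delta", "humidity", "airmass"]  # placeholders
X = rng.normal(size=(1000, 4))
iq = 0.6 + 0.2 * X[:, 1] + 0.1 * rng.normal(size=1000)  # synthetic IQ in arcsec

# Approximate the IQ PDF by fitting one model per quantile (pinball loss).
quantiles = [0.1, 0.5, 0.9]
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, iq)
    for q in quantiles
}
median_pred = models[0.5].predict(X)

# Rank input features by mean |SHAP value| for the median model.
explainer = shap.TreeExplainer(models[0.5])
shap_values = explainer.shap_values(X)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print([feature_names[i] for i in ranking])
```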
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
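As a concrete illustration of the data a motion-forecasting model consumes, here is a hypothetical sketch of a scored actor's track history and a constant-velocity baseline forecast. It mirrors the fields named above (location, heading, velocity, category), not the actual av2-api schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackState:
    x: float        # map-aligned position (m)
    y: float
    heading: float  # rad
    vx: float       # velocity components (m/s)
    vy: float

@dataclass
class ScoredActor:
    actor_id: str
    category: str                  # e.g. "vehicle", "pedestrian", "cyclist"
    history: List[TrackState] = field(default_factory=list)  # observed past

def forecast_baseline(actor: ScoredActor, horizon: int, dt: float = 0.1) -> List[Tuple[float, float]]:
    """Constant-velocity baseline: roll the last observed state forward.
    Learned models replace this with a predictor conditioned on the HD map."""
    s = actor.history[-1]
    return [(s.x + s.vx * dt * k, s.y + s.vy * dt * k) for k in range(1, horizon + 1)]
```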
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
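A sketch of the standard linear-probe protocol used to evaluate self-supervised representations on a task like brain-region prediction: freeze the encoder, fit a linear classifier on its features, and report accuracy. The toy encoder and data are stand-ins, not the benchmark's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def linear_probe(encoder, X_train, y_train, X_test, y_test):
    """Evaluate frozen features with a linear readout (common SSL protocol).
    `encoder` is any callable mapping an image batch to feature vectors."""
    z_train, z_test = encoder(X_train), encoder(X_test)
    clf = LogisticRegression(max_iter=1000).fit(z_train, y_train)
    return accuracy_score(y_test, clf.predict(z_test))

# Toy stand-in: random "images" and mean/std pooling as a trivial encoder.
rng = np.random.default_rng(3)
X_tr, X_te = rng.normal(size=(400, 32, 32)), rng.normal(size=(100, 32, 32))
y_tr, y_te = rng.integers(0, 4, 400), rng.integers(0, 4, 100)  # 4 brain regions
pool = lambda X: np.stack([X.mean((1, 2)), X.std((1, 2))], axis=1)
print(linear_probe(pool, X_tr, y_tr, X_te, y_te))
```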
The purpose of this work was to tackle practical issues which arise when using a tendon-driven robotic manipulator with a long, passive, flexible proximal section in medical applications. A separable robot which overcomes difficulties in actuation and sterilization is introduced, in which the body containing the electronics is reusable and the remainder is disposable. A control input which resolves the redundancy in the kinematics and a physical interpretation of this redundancy are provided. The effect of a static change in the proximal section angle on bending angle error was explored under four testing conditions for a sinusoidal input. Bending angle error increased for increasing proximal section angle for all testing conditions with an average error reduction of 41.48% for re-tension, 4.28% for hysteresis, and 52.35% for re-tension + hysteresis compensation relative to the baseline case. Two major sources of error in tracking the bending angle were identified: time delay from hysteresis and DC offset from the proximal section angle. Examination of these error sources revealed that the simple hysteresis compensation was most effective for removing time delay and re-tension compensation for removing DC offset, which was the primary source of increasing error. The re-tension compensation was also tested for dynamic changes in the proximal section and reduced error in the final configuration of the tip by 89.14% relative to the baseline case.
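A toy illustration of the two compensations described above for a sinusoidal bending command: the hysteresis time delay is removed by phase-advancing the command, and the DC offset from the proximal section angle is removed by subtracting a static correction. The linear offset model and all gains are assumptions for illustration, not identified parameters of the robot.

```python
import numpy as np

def compensated_command(t, amp, freq, proximal_angle_deg,
                        delay_s=0.05, offset_gain=0.002):
    """Sinusoidal bending-angle command with two simple corrections:
    - hysteresis compensation: phase-advance the command by the measured delay;
    - re-tension compensation: subtract a DC offset proportional to the
      proximal section angle (offset_gain is an assumed fitted constant)."""
    phase_advance = 2 * np.pi * freq * delay_s
    dc_offset = offset_gain * proximal_angle_deg
    return amp * np.sin(2 * np.pi * freq * t + phase_advance) - dc_offset

t = np.linspace(0, 2, 500)
cmd = compensated_command(t, amp=np.deg2rad(30), freq=0.5, proximal_angle_deg=45)
```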
Compliance in actuation has been exploited to generate highly dynamic maneuvers such as throwing that take advantage of the potential energy stored in joint springs. However, the energy storage and release could not be well-timed yet. On the contrary, for multi-link systems, the natural system dynamics might even work against the actual goal. With the introduction of variable stiffness actuators, this problem has been partially addressed. With a suitable optimal control strategy, the approximate decoupling of the motor from the link can be achieved to maximize the energy transfer into the distal link prior to launch. However, such continuous stiffness variation is complex and typically leads to oscillatory swing-up motions instead of clear launch sequences. To circumvent this issue, we investigate decoupling for speed maximization with a dedicated novel actuator concept denoted Bi-Stiffness Actuation. With this, it is possible to fully decouple the link from the joint mechanism by a switch-and-hold clutch and simultaneously keep the elastic energy stored. We show that with this novel paradigm, it is not only possible to reach the same optimal performance as with power-equivalent variable stiffness actuation, but even directly control the energy transfer timing. This is a major step forward compared to previous optimal control approaches, which rely on optimizing the full time-series control input.
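A toy single-link simulation of the switch-and-hold idea: while the clutch is engaged, a joint spring accelerates the link toward the held motor position; opening the clutch decouples the link, which then coasts, so the energy-transfer timing is controlled directly by the release time. The dynamics (no gravity or damping) and parameters are simplified placeholders, not the paper's model.

```python
import numpy as np

def simulate_launch(t_release, k=20.0, inertia=0.1, q_motor=1.0,
                    dt=1e-3, t_end=1.0):
    """Single link driven by a joint spring toward a held motor position q_motor.
    Engaged phase: spring torque k*(q_motor - q). After t_release the clutch
    opens and the link coasts at constant velocity (no gravity/damping here)."""
    q, dq = 0.0, 0.0
    for t in np.arange(0.0, t_end, dt):
        torque = k * (q_motor - q) if t < t_release else 0.0
        dq += (torque / inertia) * dt
        q += dq * dt
    return dq  # link speed at t_end; maximized by releasing at peak velocity

# Sweep the release time to find the best energy-transfer timing.
best = max(np.arange(0.05, 0.5, 0.01), key=simulate_launch)
print(f"best release time ~ {best:.2f} s, speed = {simulate_launch(best):.2f} rad/s")
```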
The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
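A schematic of the two-stage combination described above: a generic detector proposes vehicle boxes, and a hierarchical classifier then assigns labels level by level, conditioning each level on the previous decision. Function names and the three-level walk are placeholders, not the paper's HRN implementation.

```python
from typing import Callable, Dict, List, Tuple

Box = Tuple[int, int, int, int]  # x1, y1, x2, y2

def detect_then_classify(
    image,
    detector: Callable[[object], List[Box]],
    classifiers: Dict[str, Callable[[object], str]],
) -> List[Tuple[Box, List[str]]]:
    """Two-stage fine-grained detection: crop each detected box, then walk a
    three-level hierarchy (type -> make -> model), conditioning each level's
    classifier on the previous level's decision."""
    results = []
    for box in detector(image):
        crop = image.crop(box)              # PIL-style crop; placeholder interface
        vehicle_type = classifiers["type"](crop)
        make = classifiers[f"make/{vehicle_type}"](crop)
        model = classifiers[f"model/{make}"](crop)
        results.append((box, [vehicle_type, make, model]))
    return results
```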